Four-layer multiple kernel learning method based on random feature mapping
Yue YANG, Shitong WANG
Journal of Computer Applications 2022, 42(1): 16-25. DOI: 10.11772/j.issn.1001-9081.2021010171

Since there is no sound theoretical basis for the selection of the kernel function in single kernel network models, and the network node size of the Four-layer Neural Network based on Random Feature Mapping (FRFMNN) is excessively large, a Four-layer Multiple Kernel Neural Network based on Random Feature Mapping (MK-FRFMNN) algorithm was proposed. Firstly, the original input features were transformed into randomly mapped features by a specific random mapping algorithm. Then, multiple basic kernel matrices were generated through different random kernel mappings. Finally, the synthetic kernel matrix formed by the basic kernel matrices was linked to the output layer through the output weights. Because the weights of the random mapping of the original features were randomly generated according to a continuous sampling probability distribution and did not need to be updated, and the output-layer weights were solved quickly by the ridge regression pseudo-inverse algorithm, the time-consuming training process of repeated iterations was avoided. Different random weight matrices were introduced into the basic kernel mappings of MK-FRFMNN, so that the generated synthetic kernel matrix was able to not only combine the advantages of various kernel functions but also integrate the characteristics of various random distribution functions, thereby obtaining better feature selection and representation in the new feature space. Theoretical and experimental analyses show that, compared with single kernel models such as the Broad Learning System (BLS) and FRFMNN, the MK-FRFMNN model reduces the node size by about 2/3 while keeping the classification performance stable; compared with mainstream multiple kernel models, MK-FRFMNN can learn from large-sample datasets and achieves better classification performance.
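The closed-form training described in this abstract can be illustrated with a short sketch. The Python/NumPy code below is a minimal, hypothetical rendition of the four-layer pipeline (random feature mapping, several basic kernel mappings drawn from different random distributions, a stacked synthetic kernel matrix, and ridge-regression output weights); the function names, node sizes, nonlinearities, and the normal/uniform distribution choices are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def make_random_map(n_in, n_nodes, rng, dist="normal"):
    """Draw one fixed random weight matrix and bias (these weights are never trained)."""
    if dist == "normal":
        W = rng.standard_normal((n_in, n_nodes))
    else:  # a uniform draw stands in for the other random distributions mentioned
        W = rng.uniform(-1.0, 1.0, size=(n_in, n_nodes))
    b = rng.uniform(-1.0, 1.0, size=n_nodes)
    return W, b

def fit_mk_frfmnn(X, Y, n_feat=200, n_kern=100, lam=1e-2, seed=0):
    """Sketch of the four-layer pipeline: input -> random feature mapping ->
    several basic kernel mappings -> stacked synthetic kernel matrix ->
    closed-form (ridge regression) output weights."""
    rng = np.random.default_rng(seed)
    Wf, bf = make_random_map(X.shape[1], n_feat, rng)              # random feature layer
    H = np.tanh(X @ Wf + bf)
    maps = [make_random_map(n_feat, n_kern, rng, d) for d in ("normal", "uniform")]
    K = np.hstack([np.cos(H @ W + b) for W, b in maps])            # synthetic kernel matrix
    # Output weights via the regularized pseudo-inverse (ridge regression),
    # so no iterative weight updates are needed.
    beta = np.linalg.solve(K.T @ K + lam * np.eye(K.shape[1]), K.T @ Y)
    return {"Wf": Wf, "bf": bf, "maps": maps, "beta": beta}

def predict_mk_frfmnn(model, X):
    """Re-apply the fixed random mappings and the learned output weights."""
    H = np.tanh(X @ model["Wf"] + model["bf"])
    K = np.hstack([np.cos(H @ W + b) for W, b in model["maps"]])
    return K @ model["beta"]
```

Because the random weights are fixed and the output weights have a closed-form solution, no iterative training is required, which matches the time-saving argument made in the abstract.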

Open set fuzzy domain adaptation algorithm via progressive separation
Xiaolong LIU, Shitong WANG
Journal of Computer Applications 2021, 41(11): 3127-3131. DOI: 10.11772/j.issn.1001-9081.2021010061

The aim of domain adaptation is to use the knowledge of a labeled (source) domain to improve the classification performance of a model on an unlabeled (target) domain, and this approach has achieved good results. However, in open realistic scenarios, the target domain usually contains unknown classes that are not observed in the source domain, which is known as the open set domain adaptation problem. Traditional domain adaptation algorithms are powerless in such a challenging setting. Therefore, an open set fuzzy domain adaptation algorithm via progressive separation was proposed. Firstly, based on an open set fuzzy domain adaptation algorithm that introduces membership degrees, a method of separating the known-class and unknown-class samples in the target domain step by step was explored. Then, only the known-class samples separated from the target domain were aligned with the source domain, so as to reduce the distribution difference between the two domains and perform fuzzy adaptation. In this way, the negative transfer caused by the mismatch between unknown and known classes was effectively reduced. Experimental results on six domain-transfer tasks of the Office dataset show that the proposed algorithm significantly improves image classification accuracy compared with traditional domain adaptation algorithms, and verify that it can progressively enhance the accuracy and robustness of the domain adaptation classification model.
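As a rough illustration of the progressive separation idea, the following Python sketch treats the maximum class probability of a source-trained classifier as a fuzzy membership degree, gradually relaxes a threshold to separate known-class from unknown-class target samples, and aligns only the separated known-class samples with the source domain by a simple mean shift. All of these concrete choices (logistic regression, the threshold schedule, mean-shift alignment, the label -1 for unknown samples) are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def progressive_open_set_adaptation(Xs, ys, Xt, n_rounds=5, start_thr=0.9, end_thr=0.6):
    """Progressively separate known/unknown target samples and align only the
    known part with the source domain (illustrative sketch, integer labels assumed)."""
    clf = LogisticRegression(max_iter=1000).fit(Xs, ys)
    Xt_aligned = Xt.astype(float).copy()
    known_mask = np.zeros(len(Xt), dtype=bool)
    for r in range(n_rounds):
        # Relax the membership threshold step by step (progressive separation).
        thr = start_thr - (start_thr - end_thr) * r / max(n_rounds - 1, 1)
        membership = clf.predict_proba(Xt_aligned).max(axis=1)   # fuzzy membership degree
        known_mask = membership >= thr
        if known_mask.any():
            # Align only the separated known-class samples with the source domain;
            # a simple mean shift stands in for the paper's distribution alignment.
            shift = Xs.mean(axis=0) - Xt_aligned[known_mask].mean(axis=0)
            Xt_aligned = Xt_aligned + shift
    preds = clf.predict(Xt_aligned)
    return np.where(known_mask, preds, -1), known_mask   # -1 marks "unknown"
```

Aligning only the samples currently judged as known classes is what limits the negative transfer from unknown classes in this sketch, mirroring the argument in the abstract.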
